20 research outputs found

    Energy-aware Graph Job Allocation in Software Defined Air-Ground Integrated Vehicular Networks

    Full text link
    Software defined air-ground integrated vehicular (SD-AGV) networks have emerged as a promising paradigm that realizes flexible on-ground resource sharing to support innovative UAV applications with heavy computational overhead. In this paper, we investigate a vehicular cloud-assisted graph job allocation problem in SD-AGV networks, where both the computation-intensive jobs carried by UAVs and the vehicular cloud are modeled as graphs. To map each component of the graph jobs to a feasible vehicle, while balancing the trade-off among minimizing the UAVs' job completion time, energy consumption, and the data exchange cost among vehicles, we formulate the problem as a mixed-integer non-linear programming problem, which is NP-hard. Moreover, the constraint on preserving job structures requires addressing the subgraph isomorphism problem, which further complicates the algorithm design. Motivated by this, we propose an efficient decoupled approach that separates the template search (finding feasible mappings between components and vehicles) from the transmission power allocation. For the former, we present an efficient algorithm that searches for all subgraph isomorphisms with low computational complexity. For the latter, we introduce a power allocation algorithm based on convex optimization techniques. Extensive simulations demonstrate that the proposed approach outperforms benchmark methods across various problem sizes. Comment: 14 pages, 7 figures.
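    Editor's note: the template-search step described in this abstract amounts to enumerating subgraph isomorphisms between a job graph and the vehicular cloud graph. Below is a minimal illustrative sketch of that enumeration using networkx's VF2 matcher; the job and cloud topologies and the names `job`/`cloud` are hypothetical placeholders, not taken from the paper, and the paper's own low-complexity search algorithm is not reproduced here.

        # Illustrative sketch only: enumerate candidate "templates" (mappings of a
        # graph job onto a vehicular cloud graph) via VF2 subgraph isomorphism search.
        # The job/cloud topologies below are hypothetical, not from the paper.
        import networkx as nx
        from networkx.algorithms import isomorphism

        # Graph job: components (nodes) and required data-exchange links (edges).
        job = nx.Graph([("c1", "c2"), ("c2", "c3")])

        # Vehicular cloud: vehicles and their V2V connectivity.
        cloud = nx.Graph([("v1", "v2"), ("v2", "v3"), ("v3", "v4"), ("v2", "v4")])

        # VF2 enumerates node-induced subgraph isomorphisms of `job` inside `cloud`.
        matcher = isomorphism.GraphMatcher(cloud, job)
        templates = [
            {component: vehicle for vehicle, component in mapping.items()}
            for mapping in matcher.subgraph_isomorphisms_iter()
        ]
        print(f"{len(templates)} feasible templates, e.g. {templates[0]}")

    In the paper's setting, each enumerated template would then be evaluated by the power-allocation subproblem; the sketch stops at enumeration.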

    Learning-Based Client Selection for Federated Learning Services Over Wireless Networks with Constrained Monetary Budgets

    Full text link
    We investigate a data quality-aware dynamic client selection problem for multiple federated learning (FL) services in a wireless network, where each client offers dynamic datasets for the simultaneous training of multiple FL services, and each FL service demander has to pay the clients under a constrained monetary budget. The problem is formulated as a non-cooperative Markov game over the training rounds. A multi-agent hybrid deep reinforcement learning-based algorithm is proposed to optimize the joint client selection and payment actions while avoiding action conflicts. Simulation results indicate that the proposed algorithm significantly improves training performance. Comment: 6 pages, 8 figures.
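    Editor's note: to make the budget-constrained selection step concrete, the following sketch shows one FL service demander picking clients under a monetary budget using per-client value estimates. The values, prices, budget, and the greedy rule are stand-in assumptions; the paper's actual decision is produced by a multi-agent hybrid DRL policy, which this does not reproduce.

        # Illustrative sketch only: budget-constrained client selection for one FL
        # service, ranking clients by an (assumed) value-per-cost estimate.
        # In the paper this decision comes from a multi-agent hybrid DRL policy;
        # the greedy rule below is a stand-in to show the budget constraint.

        def select_clients(values, prices, budget):
            """Pick clients by value-per-cost without exceeding the budget."""
            ranked = sorted(values, key=lambda c: values[c] / prices[c], reverse=True)
            chosen, spent = [], 0.0
            for client in ranked:
                if spent + prices[client] <= budget:
                    chosen.append(client)
                    spent += prices[client]
            return chosen, spent

        # Hypothetical per-round estimates (e.g., data quality) and asking prices.
        values = {"clientA": 0.9, "clientB": 0.7, "clientC": 0.4, "clientD": 0.3}
        prices = {"clientA": 5.0, "clientB": 2.0, "clientC": 1.0, "clientD": 2.5}
        print(select_clients(values, prices, budget=6.0))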

    Seamless Service Provisioning for Mobile Crowdsensing: Towards Integrating Forward and Spot Trading Markets

    Full text link
    The challenge of exchanging and processing big data over Mobile Crowdsensing (MCS) networks calls for new designs for responsive and seamless service provisioning as well as proper incentive mechanisms. Although conventional onsite spot trading of resources, based on real-time network conditions and decisions, can facilitate data sharing over MCS networks, it often suffers from prohibitively long service provisioning delays and unavoidable trading failures due to its reliance on timely analysis of complex and dynamic MCS environments. These limitations motivate us to investigate an integrated forward and spot trading mechanism (iFAST), which entails a new hybrid service trading protocol over the MCS network architecture. In iFAST, sellers (i.e., mobile users with sensing resources) can provide long-term or temporary sensing services to buyers (i.e., sensing task owners). iFAST enables signing long-term contracts in advance of future transactions through a forward trading mode, by analyzing historical statistics of the market, for which the notion of overbooking is introduced and promoted. iFAST further enables buyers with unsatisfactory service quality to recruit temporary sellers through a spot trading mode, based on the current market/network conditions. We analyze the fundamental building blocks of iFAST and provide a case study to demonstrate its superior performance compared to existing methods. Finally, future research directions on reliable service provisioning for next-generation MCS networks are summarized.
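    Editor's note: as a back-of-the-envelope companion to the overbooking idea, the sketch below estimates the expected shortfall of a forward contract that overbooks sellers with a known no-show probability, and hence how much spot-market recruitment would typically be needed to cover it. All numbers (no-show rate, demand, overbooking levels) are hypothetical and only illustrate the trade-off, not the paper's market model.

        # Illustrative sketch only: expected coverage of a forward contract that
        # overbooks sellers, with the spot market filling any remaining shortfall.
        # Show-up probability, demand, and booking levels are hypothetical.
        from math import comb

        def expected_shortfall(demand, booked, p_show):
            """E[max(demand - #sellers who show up, 0)] with i.i.d. show-ups."""
            shortfall = 0.0
            for k in range(booked + 1):
                p_k = comb(booked, k) * p_show**k * (1 - p_show) ** (booked - k)
                shortfall += p_k * max(demand - k, 0)
            return shortfall

        demand, p_show = 10, 0.8            # need 10 sellers; each shows up w.p. 0.8
        for booked in (10, 12, 14):         # forward-book exactly, or overbook
            gap = expected_shortfall(demand, booked, p_show)
            print(f"booked={booked}: expected spot recruitment ~ {gap:.2f} sellers")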

    DISCO: Achieving Low Latency and High Reliability in Scheduling of Graph-Structured Tasks over Mobile Vehicular Cloud

    Full text link
    To effectively process data across a fleet of dynamic and distributed vehicles, it is crucial to implement resource provisioning techniques that provide reliable, cost-effective, and real-time computing services. This article explores resource provisioning for computation-intensive tasks over mobile vehicular clouds (MVCs). We use undirected weighted graphs (UWGs) to model both the execution of tasks and the communication patterns among vehicles in an MVC. We then study low-latency and reliable scheduling of UWG tasks through a novel methodology named double-plan-promoted isomorphic subgraph search and optimization (DISCO). In DISCO, two complementary plans are envisioned to ensure effective task completion: Plan A and Plan B. Plan A analyzes past data to create an optimal mapping (α) between tasks and the MVC in advance of the actual task scheduling. Plan B serves as a dependable backup, designed to find a feasible mapping (β) in case α fails during task scheduling due to the unpredictable nature of the network. We delve into DISCO's procedure and the key factors that contribute to its success. Additionally, we provide a case study with comprehensive comparisons to demonstrate DISCO's exceptional performance with regard to time efficiency and overhead. We further discuss a series of open directions for future research.
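    Editor's note: the Plan A/Plan B logic lends itself to a small control-flow sketch: precompute a mapping α offline, validate it against current vehicle availability at scheduling time, and fall back to a fresh feasible search for β only when α breaks. The validation rule and the networkx-based fallback below are illustrative assumptions, not DISCO's actual optimization.

        # Illustrative sketch only: Plan A reuses a precomputed task-to-vehicle
        # mapping (alpha); Plan B searches for any feasible mapping (beta) when
        # alpha is invalidated by vehicles leaving the MVC. Not DISCO itself.
        import networkx as nx
        from networkx.algorithms import isomorphism

        def plan_a_valid(alpha, task_graph, mvc_graph):
            """alpha still works if its vehicles are present and task edges are covered."""
            if not all(v in mvc_graph for v in alpha.values()):
                return False
            return all(mvc_graph.has_edge(alpha[u], alpha[w]) for u, w in task_graph.edges)

        def plan_b(task_graph, mvc_graph):
            """Fallback: return the first feasible mapping found, or None."""
            matcher = isomorphism.GraphMatcher(mvc_graph, task_graph)
            for mapping in matcher.subgraph_isomorphisms_iter():
                return {task: vehicle for vehicle, task in mapping.items()}
            return None

        task_graph = nx.path_graph(["t1", "t2", "t3"])
        mvc_graph = nx.Graph([("v1", "v2"), ("v2", "v3"), ("v3", "v4")])
        alpha = {"t1": "v1", "t2": "v2", "t3": "v5"}   # v5 has left the fleet
        mapping = alpha if plan_a_valid(alpha, task_graph, mvc_graph) else plan_b(task_graph, mvc_graph)
        print(mapping)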

    Intelligent Caching for Mobile Video Streaming in Vehicular Networks with Deep Reinforcement Learning

    No full text
    Caching-enabled multi-access edge computing (MEC) has attracted wide attention as a means to support future intelligent vehicular networks, especially for delivering high-definition videos in the Internet of Vehicles with limited backhaul capacity. However, factors such as the constrained storage capacity of MEC servers and the mobility of vehicles pose challenges to caching reliability, particularly for supporting multi-bitrate video streaming caching while achieving considerable quality of experience (QoE). Motivated by these challenges, in this paper we propose an intelligent caching strategy that takes into account vehicle mobility, time-varying content popularity, and backhaul capability to effectively improve the QoE of vehicle users. First, based on the mobile video mean opinion score (MV-MOS), we design an average download percentage (ADP) weighted QoE evaluation model. Then, the video content caching problem is formulated as a Markov decision process (MDP) to maximize the ADP-weighted MV-MOS. Since prior knowledge of video content popularity and channel state information may not be available at the roadside unit in practical scenarios, we propose a deep reinforcement learning (DRL)-based caching strategy to solve the problem while maximizing the ADP-weighted MV-MOS. To accelerate its convergence, we further integrate prioritized experience replay, dueling, and double deep Q-network techniques, which improve the performance of the DRL algorithm. Numerical results demonstrate that the proposed DRL-based caching strategy significantly improves QoE and achieves better video delivery reliability compared to existing non-learning approaches.
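    Editor's note: for readers unfamiliar with the dueling architecture mentioned above, the following PyTorch sketch shows a standard dueling Q-network head, where the Q-value is decomposed into a state value and mean-centered advantages. The layer sizes and state/action dimensions are placeholders; the paper's actual network, prioritized replay, and double-DQN target update are not reproduced here.

        # Illustrative sketch only: a standard dueling Q-network head.
        # Layer sizes and state/action dimensions are placeholders.
        import torch
        import torch.nn as nn

        class DuelingQNetwork(nn.Module):
            def __init__(self, state_dim: int, num_actions: int, hidden: int = 128):
                super().__init__()
                self.trunk = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU())
                self.value = nn.Linear(hidden, 1)                # state value V(s)
                self.advantage = nn.Linear(hidden, num_actions)  # advantages A(s, a)

            def forward(self, state: torch.Tensor) -> torch.Tensor:
                h = self.trunk(state)
                v, a = self.value(h), self.advantage(h)
                # Q(s, a) = V(s) + A(s, a) - mean_a A(s, a) for identifiability.
                return v + a - a.mean(dim=-1, keepdim=True)

        # E.g., a state could encode content popularity estimates and cache occupancy,
        # and each action could correspond to caching a particular video segment.
        q_net = DuelingQNetwork(state_dim=16, num_actions=8)
        print(q_net(torch.randn(4, 16)).shape)   # -> torch.Size([4, 8])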